Learned Dual-View Reflection Removal
Traditional reflection removal algorithms either use a single image as input,
which suffers from intrinsic ambiguities, or use multiple images from a moving
camera, which is inconvenient for users. We instead propose a learning-based
dereflection algorithm that uses stereo images as input. This is an effective
trade-off between the two extremes: the parallax between two views provides
cues to remove reflections, and two views are easy to capture due to the
adoption of stereo cameras in smartphones. Our model consists of a
learning-based reflection-invariant flow model for dual-view registration, and
a learned synthesis model for combining aligned image pairs. Because no dataset
for dual-view reflection removal exists, we render a synthetic dataset of
dual-views with and without reflections for use in training. Our evaluation on
an additional real-world dataset of stereo pairs shows that our algorithm
outperforms existing single-image and multi-image dereflection approaches.
Comment: http://sniklaus.com/dualre
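As a rough illustration only (not the authors' released code), the two-stage design described above can be sketched as a flow network that registers the second view against the reference view, followed by a synthesis network that fuses the aligned pair; the layer sizes, module names, and warping details below are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def warp(image, flow):
    """Backward-warp `image` with a dense flow field of shape (B, 2, H, W)."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(image.device)   # (2, H, W) pixel coordinates
    coords = grid.unsqueeze(0) + flow                               # absolute sampling positions
    # normalize to [-1, 1] as expected by grid_sample
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    grid_norm = torch.stack((coords_x, coords_y), dim=-1)           # (B, H, W, 2)
    return F.grid_sample(image, grid_norm, align_corners=True)

class DualViewDereflection(nn.Module):
    """Hypothetical two-stage model: reflection-invariant flow + learned fusion."""
    def __init__(self):
        super().__init__()
        # stand-in CNNs; the paper's actual architectures are not specified here
        self.flow_net = nn.Sequential(nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
                                      nn.Conv2d(32, 2, 3, padding=1))
        self.synth_net = nn.Sequential(nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
                                       nn.Conv2d(32, 3, 3, padding=1))

    def forward(self, left, right):
        flow = self.flow_net(torch.cat((left, right), dim=1))       # register right view to left
        right_aligned = warp(right, flow)
        return self.synth_net(torch.cat((left, right_aligned), dim=1))  # reflection-free estimate

left = torch.rand(1, 3, 64, 64)
right = torch.rand(1, 3, 64, 64)
clean = DualViewDereflection()(left, right)   # (1, 3, 64, 64)
```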
Learning Lens Blur Fields
Optical blur is an inherent property of any lens system and is challenging to
model in modern cameras because of their complex optical elements. To tackle
this challenge, we introduce a high-dimensional neural representation of
blur and a practical method for acquiring
it. The lens blur field is a multilayer perceptron (MLP) designed to (1)
accurately capture variations of the lens 2D point spread function over image
plane location, focus setting and, optionally, depth, and (2) represent these
variations parametrically as a single, sensor-specific function. The
representation models the combined effects of defocus, diffraction, and
aberration, and accounts for sensor features such as pixel color filters and
pixel-specific
micro-lenses. To learn the real-world blur field of a given device, we
formulate a generalized non-blind deconvolution problem that directly optimizes
the MLP weights using a small set of focal stacks as the only input. We also
provide a first-of-its-kind dataset of 5D blur fields for smartphone cameras,
camera bodies equipped with a variety of lenses, etc. Lastly, we show that
acquired 5D blur fields are expressive and accurate enough to reveal, for the
first time, differences in optical behavior of smartphone devices of the same
make and model.
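As a hedged sketch of the representation described above (not the paper's actual parameterization), a lens blur field can be written as an MLP that maps a query such as image-plane position, focus setting, and depth to a small per-channel PSF; the query layout, kernel size, and layer widths below are placeholders.

```python
import torch
import torch.nn as nn

class LensBlurField(nn.Module):
    """Hypothetical MLP mapping (x, y, focus, depth) -> per-color PSF kernel."""
    def __init__(self, kernel_size: int = 11, hidden: int = 256):
        super().__init__()
        self.kernel_size = kernel_size
        # query: normalized image-plane x, y, focus setting, scene depth (an assumed layout)
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 * kernel_size * kernel_size),  # one kernel per color channel
        )

    def forward(self, query):
        logits = self.mlp(query)                                   # (N, 3*K*K)
        psf = logits.view(-1, 3, self.kernel_size, self.kernel_size)
        psf = torch.softmax(psf.flatten(2), dim=-1).view_as(psf)   # PSFs are non-negative and sum to 1
        return psf

field = LensBlurField()
query = torch.tensor([[0.5, 0.5, 0.3, 2.0]])   # sensor center, one focus/depth value
print(field(query).shape)                       # torch.Size([1, 3, 11, 11])
```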
SunStage: Portrait Reconstruction and Relighting using the Sun as a Light Stage
Outdoor portrait photographs are often marred by the harsh shadows cast under
direct sunlight. To resolve this, one can use post-capture lighting
manipulation techniques, but these methods either require complex hardware
(e.g., a light stage) to capture each individual, or rely on image-based priors
and thus fail to reconstruct many of the subtle facial details that vary from
person to person. In this paper, we present SunStage, a system for accurate,
individually tailored, and lightweight reconstruction of facial geometry and
reflectance that can be used for general portrait relighting with cast shadows.
Our method only requires the user to capture a selfie video outdoors, rotating
in place, and uses the varying angles between the sun and the face as
constraints in the joint reconstruction of facial geometry, reflectance
properties, and lighting parameters. Aside from relighting, we show that our
reconstruction can be used for applications like reflectance editing and view
synthesis. Results and interactive demos are available at
https://grail.cs.washington.edu/projects/sunstage/.
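A minimal sketch of the kind of joint optimization the abstract describes, under strong simplifying assumptions (Lambertian shading, known per-frame sun directions, per-pixel normals standing in for geometry); none of the tensor names or the shading model below come from the paper.

```python
import torch

# Toy setup: per-pixel normals and albedo of a face crop, plus global lighting,
# optimized so that a simple sun + ambient shading model reproduces each frame.
H, W, N_FRAMES = 64, 64, 20
frames = torch.rand(N_FRAMES, 3, H, W)            # captured selfie frames (placeholder data)
sun_dirs = torch.nn.functional.normalize(torch.randn(N_FRAMES, 3), dim=-1)  # per-frame sun direction

albedo = torch.rand(3, H, W, requires_grad=True)    # diffuse reflectance
normals = torch.randn(3, H, W, requires_grad=True)  # surface normals (unnormalized parameters)
sun_rgb = torch.ones(3, requires_grad=True)         # sun color/intensity
ambient = torch.full((3,), 0.2, requires_grad=True) # ambient sky term

opt = torch.optim.Adam([albedo, normals, sun_rgb, ambient], lr=1e-2)

for step in range(200):
    n = torch.nn.functional.normalize(normals, dim=0)                # (3, H, W) unit normals
    loss = 0.0
    for f in range(N_FRAMES):
        cos = (n * sun_dirs[f].view(3, 1, 1)).sum(0).clamp(min=0)    # per-pixel n·l with cast-shadow-free clamp
        shading = sun_rgb.view(3, 1, 1) * cos + ambient.view(3, 1, 1)
        rendered = albedo * shading                                   # Lambertian image formation
        loss = loss + torch.mean((rendered - frames[f]) ** 2)         # photometric loss across frames
    opt.zero_grad()
    loss.backward()
    opt.step()
```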